Applied Mathematics for Class 11th & 12th (Concepts and Questions)


Content On This Page

  • Introduction and Related Terminology
  • Mathematical Formulation of Linear Programming Problem
  • Different Types of Linear Programming Problems
  • Graphical Method of Solution for Problems in Two Variables
  • Feasible and Infeasible Regions
  • Feasible and Infeasible Solutions
  • Optimal Feasible Solution


Chapter 10 Linear Programming (Concepts)




Introduction and Related Terminology


In the realm of business, economics, engineering, and various other fields, decision-makers are constantly faced with the challenge of making optimal choices when resources are limited and there are competing demands. For instance, a manufacturer needs to decide how many units of different products to produce given limited raw materials, labour hours, and machine time, with the goal of maximizing total profit. A company might need to minimize the cost of transporting goods from factories to warehouses while respecting vehicle capacity and demand at each warehouse. Linear Programming (LP) is a powerful mathematical technique developed to tackle precisely these types of optimization problems, where the relationships between variables are linear.

Linear Programming is a mathematical method used to find the best possible outcome (maximum or minimum value) of a mathematical model whose requirements are represented by linear equations or inequalities. It is a cornerstone of operations research and has wide-ranging applications in various industries.

The term "Linear" refers to the fact that all the mathematical relationships in the model (the function to be optimized and the limitations) are linear. The term "Programming" here refers to planning or scheduling activities, not computer programming.


Core Idea of Linear Programming

The fundamental idea behind linear programming is to model a real-world optimization problem using a set of linear mathematical expressions and then solve this model to find the values of the decision variables that yield the optimal value for the objective function while satisfying all the constraints.

A linear programming problem involves:

  • Decision variables: the unknown quantities to be determined;
  • An objective function: a linear function of the decision variables to be maximized or minimized;
  • Constraints: linear equalities or inequalities that restrict the values the decision variables can take; and
  • Non-negativity restrictions: the requirement that each decision variable be greater than or equal to zero.


Key Terminology in Linear Programming

Understanding the following terms is essential for formulating and solving Linear Programming Problems (LPPs):

Decision Variables

These are the unknown quantities whose values must be determined in order to find the optimal solution. They represent the levels of activities or choices to be made. For example, in a problem about producing furniture, the decision variables could be:

  • $x$ = the number of chairs to be produced, and
  • $y$ = the number of tables to be produced.

Decision variables must be clearly defined and quantifiable. They are typically denoted by algebraic symbols such as $x_j$, where $j=1, 2, \dots, n$, and $n$ is the number of variables. In this chapter, we will largely focus on problems with two decision variables ($x_1, x_2$ or $x, y$) to facilitate graphical solutions.

Objective Function

The Objective Function is the mathematical expression of the goal of the optimization problem. This function is a linear combination of the decision variables that we want to either maximize (e.g., profit, sales) or minimize (e.g., cost, time).

It is typically written in the form:

Optimize $Z = c_1x_1 + c_2x_2 + \dots + c_nx_n$

... (1)

Here, $Z$ represents the value to be optimized (e.g., total profit or cost). The coefficients $c_j$ are constants representing the per-unit contribution of the decision variable $x_j$ to the objective (e.g., profit per chair, cost per table). The optimization can be either maximization or minimization.

Constraints

Constraints are limitations or restrictions that impose boundaries on the values that the decision variables can take. These constraints arise from the limited availability of resources (like raw materials, machine hours, labor), technological limitations, marketing considerations, quality requirements, or other conditions specific to the problem. Constraints are expressed as linear equalities or inequalities involving the decision variables.

A general form of a linear constraint is:

$a_{i1}x_1 + a_{i2}x_2 + \dots + a_{in}x_n \quad \{\le, =, \ge\} \quad b_i$

for $i = 1, 2, \dots, m$

... (2)

where $a_{ij}$ are constant coefficients (e.g., amount of resource $i$ required per unit of $x_j$), and $b_i$ are constants representing the limit or requirement for the $i$-th constraint (e.g., total available quantity of resource $i$). The symbol $\{\le, =, \ge\}$ indicates that a constraint can be a less than or equal to inequality, an equality, or a greater than or equal to inequality. There are $m$ such constraints.

Non-Negativity Constraints

In almost all practical linear programming problems, the decision variables represent quantities that cannot be negative, such as the number of items produced, the quantity of resources used, the number of people employed, etc. These restrictions are specified as Non-Negativity Constraints.

$x_j \ge 0$

for $j = 1, 2, \dots, n$

... (3)

These constraints are fundamental and are usually implicitly understood, but they must be explicitly included in the mathematical formulation of the LPP. They confine the possible values of the decision variables to the first quadrant in a two-variable problem, or the non-negative orthant in higher dimensions.

A Linear Programming Problem is completely defined by its objective function, the set of constraints, and the non-negativity restrictions. The goal is to find the set of values for the decision variables that satisfies all constraints (including non-negativity) and simultaneously optimizes the objective function.
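The components just listed can be collected into a small data structure, which makes the anatomy of an LPP concrete. This is an illustrative sketch only; the class and field names are this note's own, not standard notation:

```python
from dataclasses import dataclass

@dataclass
class LPP:
    """One linear programming problem: optimize c.x subject to linear constraints."""
    sense: str            # "max" or "min"
    c: list               # objective coefficients c_1, ..., c_n
    A: list               # constraint coefficients a_ij, one row per constraint
    relations: list       # "<=", "=", or ">=" for each of the m constraints
    b: list               # right-hand sides b_1, ..., b_m
    # Non-negativity x_j >= 0 is assumed for every decision variable.

# Example 1 later in this chapter: maximize Z = 50x + 60y
# subject to 2x + y <= 40 and x + 3y <= 30
lpp = LPP(sense="max", c=[50, 60], A=[[2, 1], [1, 3]],
          relations=["<=", "<="], b=[40, 30])
print(lpp.sense, "problem with", len(lpp.b), "constraints")
```

Any LPP in this chapter can be written down in this shape: the objective is `sense` plus `c`, each row of `A` with its relation and entry of `b` is one constraint, and non-negativity is implicit.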



Mathematical Formulation of Linear Programming Problem


The first and often the most crucial step in solving a real-world optimization problem using Linear Programming (LP) is to translate the problem description into a concise mathematical model. This process is called Mathematical Formulation of the Linear Programming Problem (LPP). A well-formulated LPP consists of a clear objective function to be optimized, a set of linear constraints that define the feasible region, and non-negativity restrictions on the decision variables.


Steps in Mathematical Formulation

Formulating a Linear Programming Problem from a verbal description typically involves the following systematic steps:

  1. Understand the Problem: Read the problem statement carefully to grasp the context, identify the decisions that need to be made, determine what needs to be optimized (maximized or minimized), and recognize all the limitations or requirements.
  2. Identify and Define Decision Variables: Determine the quantities whose values must be decided to achieve the objective. Assign algebraic symbols (like $x_1, x_2, \dots, x_n$, or $x, y$ for two variables) to represent these decision variables. Clearly state what each variable represents in the context of the problem. For instance, "Let $x_1$ be the number of units of Product A to be produced per week," or "Let $y$ be the number of kilograms of Ingredient B to be used."
  3. Formulate the Objective Function: Identify the objective (maximize profit, minimize cost, maximize production, etc.) and express it as a linear function of the decision variables. Write down the objective function explicitly in the form "Optimize $Z = c_1x_1 + c_2x_2 + \dots + c_nx_n$", indicating whether it is a maximization or minimization problem. The coefficients $c_j$ are the contribution of each unit of $x_j$ to the objective.
  4. Formulate the Constraints: Identify all the limitations, restrictions, or conditions given in the problem. Each constraint must be expressed as a linear equality or inequality involving the decision variables. Write down each constraint separately. Remember that constraints can be of the form $\le$, $\ge$, or $=$.
  5. Add Non-Negativity Constraints: In most practical applications, the decision variables represent physical quantities (like number of items, amount of material) that cannot be negative. Include constraints that specify that each decision variable must be greater than or equal to zero, i.e., $x_j \ge 0$ for all $j$.


Standard Form of an LPP

Once formulated, a Linear Programming Problem is typically presented in a standard structure:

Objective: Optimize (Maximize or Minimize) $Z = c_1x_1 + c_2x_2 + \dots + c_nx_n$

Subject to:

Constraints:

$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n \quad \{\le, =, \ge\} \quad b_1$

$a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n \quad \{\le, =, \ge\} \quad b_2$

$\vdots$

$a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n \quad \{\le, =, \ge\} \quad b_m$

Non-Negativity Constraints:

$x_j \ge 0$

for $j = 1, 2, \dots, n$

In the context of problems solvable by the graphical method (with two variables), the formulation will involve $x_1$ and $x_2$ (or $x$ and $y$).


Example Formulation

Example 1. A manufacturer produces two types of toys, Toy A and Toy B. Toy A requires 2 hours of cutting and 1 hour of assembling. Toy B requires 1 hour of cutting and 3 hours of assembling. The factory has a maximum of 40 hours of cutting time and 30 hours of assembling time available per week. The profit on Toy A is $\textsf{₹}50$ and on Toy B is $\textsf{₹}60$. Formulate a Linear Programming Problem to determine the number of toys of each type that should be produced per week to maximize the total profit.

Answer:

Solution:

Step 1: Understand the Problem

The manufacturer wants to decide how many units of Toy A and Toy B to produce weekly. The goal is to maximize the total profit. The limitations are the available cutting time and assembling time.

Step 2: Define Decision Variables

Let $x$ be the number of units of Toy A produced per week. Let $y$ be the number of units of Toy B produced per week.

Step 3: Formulate the Objective Function

The objective is to maximize the total profit. Profit from one unit of Toy A is $\textsf{₹}50$, so the total profit from $x$ units of Toy A is $50x$. Profit from one unit of Toy B is $\textsf{₹}60$, so the total profit from $y$ units of Toy B is $60y$. Total Profit $Z = 50x + 60y$.

Maximize $Z = 50x + 60y$

[Objective Function]

Step 4: Formulate the Constraints

Identify the constraints based on resource availability:

Cutting Time Constraint: Toy A requires 2 hours of cutting per unit. Total cutting time for $x$ units of Toy A is $2x$. Toy B requires 1 hour of cutting per unit. Total cutting time for $y$ units of Toy B is $1y = y$. Total cutting time used is $2x + y$. Maximum available cutting time is 40 hours. The constraint is: Total cutting time used $\le$ Maximum available cutting time.

$2x + y \le 40$

[Cutting Constraint] ... (1)

Assembling Time Constraint: Toy A requires 1 hour of assembling per unit. Total assembling time for $x$ units of Toy A is $1x = x$. Toy B requires 3 hours of assembling per unit. Total assembling time for $y$ units of Toy B is $3y$. Total assembling time used is $x + 3y$. Maximum available assembling time is 30 hours. The constraint is: Total assembling time used $\le$ Maximum available assembling time.

$x + 3y \le 30$

[Assembling Constraint] ... (2)

Step 5: Add Non-Negativity Constraints

Since the number of toys produced cannot be negative:

$x \ge 0$

[Non-negativity Constraint for Toy A]

$y \ge 0$

[Non-negativity Constraint for Toy B]

These are typically combined and written together.

$x \ge 0, y \ge 0$

... (3)

Summary of LPP Formulation:

The Linear Programming Problem is formulated as follows:

Maximize $Z = 50x + 60y$

Subject to the constraints:

$2x + y \le 40$

... (1)

$x + 3y \le 30$

... (2)

$x \ge 0, y \ge 0$

... (3)

This mathematical model now represents the manufacturer's problem and can be solved using appropriate LP techniques (like the graphical method for this two-variable problem).
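The formulated model can also be solved numerically. Below is a minimal sketch using SciPy's `linprog` (an assumption of this note, not part of the textbook exposition; it requires SciPy to be installed). Since `linprog` minimizes by convention, the profit coefficients are negated:

```python
from scipy.optimize import linprog

# Maximize Z = 50x + 60y  ->  minimize -50x - 60y
c = [-50, -60]

# Constraints in the A_ub @ [x, y] <= b_ub form expected by linprog
A_ub = [[2, 1],   # cutting:    2x + y  <= 40
        [1, 3]]   # assembling: x  + 3y <= 30
b_ub = [40, 30]

# bounds encodes the non-negativity constraints x >= 0, y >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

x, y = res.x
print(f"Toy A: {x:.0f}, Toy B: {y:.0f}, max profit: {-res.fun:.0f}")
```

For this model the optimum lies at the intersection of the two resource constraints, $x = 18$, $y = 4$, giving a maximum weekly profit of ₹1140; you can confirm this by checking the corner points of the feasible region by hand.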



Different Types of Linear Programming Problems


Linear Programming (LP) is a versatile mathematical tool applicable to a wide array of real-world situations that involve optimizing a linear objective function subject to linear constraints. The specific nature of the problem depends on the context from which the decision variables, objective function, and constraints are derived. While the underlying mathematical structure (maximizing or minimizing a linear function subject to linear constraints and non-negativity) remains the same, problems are often categorized based on their practical application domain. Understanding these different types helps recognize situations where LP can be effectively applied.


Common Types of Linear Programming Problems

1. Product Mix Problems

These are among the most common types of LPPs, frequently encountered in manufacturing and production planning. The goal is to determine the optimal quantity of each product that should be produced, given limited resources (such as raw materials, labor hours, machine time, production capacity, etc.), to maximize profit or minimize production cost.

In a typical product mix problem:

  • The decision variables are the quantities of each product to be produced;
  • The objective function is the total profit (to be maximized) or total production cost (to be minimized);
  • The constraints reflect the limited availability of resources such as raw materials, labour hours, and machine time.

The example presented in the formulation section (Example 1, where a manufacturer produces Toy A and Toy B subject to limited cutting and assembling time) is a classic product mix problem.


2. Diet or Nutritional Problems

These problems aim to determine the most economical way to create a diet or mix of ingredients while meeting specific nutritional requirements. They are common in food production, agriculture, and even for planning human or animal diets.

In a typical diet problem:

  • The decision variables are the quantities of each food or ingredient to include in the diet;
  • The objective function is the total cost of the diet (to be minimized);
  • The constraints ensure that the minimum (or maximum) requirement for each nutrient is met.


3. Transportation Problems

Transportation problems involve deciding the optimal plan for transporting goods from multiple sources (e.g., factories, warehouses) to multiple destinations (e.g., distribution centers, retail stores). The objective is typically to minimize the total transportation cost, subject to the supply available at each source and the demand required at each destination.

In a typical transportation problem:

  • The decision variables are the quantities shipped from each source to each destination;
  • The objective function is the total transportation cost (to be minimized);
  • The constraints are the supply available at each source and the demand required at each destination.


4. Assignment Problems

Assignment problems are a special type of allocation problem where tasks are assigned to resources on a one-to-one basis. For instance, assigning workers to jobs, machines to tasks, or projects to teams. The goal is to make assignments that optimize an objective, such as minimizing the total cost of completing all tasks or maximizing the total effectiveness.

In a typical assignment problem:

  • The decision variables indicate whether a particular task is assigned to a particular resource;
  • The objective function is the total cost or time (to be minimized) or total effectiveness (to be maximized);
  • The constraints ensure that each task is assigned to exactly one resource and each resource to exactly one task.

While sometimes solved using specific assignment algorithms, they can be formulated and solved as LPPs.


5. Blending Problems

Blending problems occur when different raw materials or components with varying properties and costs are mixed together to produce a final product that meets certain specifications at minimum cost. These are common in industries like petroleum refining (blending crude oils to produce gasoline), food and beverage production, and chemical manufacturing.

In a typical blending problem:

  • The decision variables are the quantities of each raw material or component used in the blend;
  • The objective function is the total cost of the blend (to be minimized);
  • The constraints ensure that the final product meets the required specifications (for example, on quality or composition).


6. Financial Portfolio Management Problems

LP can be used in finance to help investors decide how to allocate their funds among various investment options (like stocks, bonds, mutual funds) to achieve a financial goal.

In a typical portfolio problem:

  • The decision variables are the amounts invested in each option;
  • The objective function is the total expected return (to be maximized) or the total risk (to be minimized);
  • The constraints may include the total budget available, limits on individual investments, and risk or diversification requirements.

These examples demonstrate the wide applicability of Linear Programming. Regardless of the domain, the core challenge remains the same: optimizing a linear objective function subject to a set of linear constraints. The method of solving the LPP depends on the number of decision variables and constraints. For problems with two decision variables, the graphical method is a suitable and intuitive approach.



Graphical Method of Solution for Problems in Two Variables


The Graphical Method is a visual technique used to solve Linear Programming Problems (LPPs) that involve exactly two decision variables. It provides a clear geometric interpretation of the problem, allowing us to identify the set of all possible solutions and then find the one that optimizes the objective function. The method does not extend to problems with more variables: a three-variable feasible region would require a 3D graph, and regions in four or more dimensions cannot be drawn at all.

The core idea is to plot the constraints on a two-dimensional graph to determine the region of feasible solutions and then use the objective function to find the point within this region that yields the optimal value.


Steps for Solving LPPs Using the Graphical Method

To solve a Linear Programming Problem with two decision variables (let's call them $x$ and $y$) using the graphical method, follow these steps:

  1. Formulate the LPP: Ensure the problem is mathematically formulated in the standard LPP form, consisting of:
    • The objective function (Maximize or Minimize $Z = c_1x + c_2y$).
    • A set of linear constraints (e.g., $a_1x + b_1y \le d_1$, $a_2x + b_2y \ge d_2$, etc.).
    • Non-negativity constraints ($x \ge 0, y \ge 0$).
    Make sure there are only two decision variables.
  2. Plot the Constraints on a Graph:
    • Draw a two-dimensional graph with $x$ on the horizontal axis and $y$ on the vertical axis. Since we usually have non-negativity constraints ($x \ge 0, y \ge 0$), focus on the first quadrant.
    • For each linear inequality constraint (e.g., $Ax + By \le C$ or $Ax + By \ge C$), first treat it as a linear equation ($Ax + By = C$) and draw the corresponding straight line on the graph. To draw the line, find at least two points that satisfy the equation. The easiest points to find are usually the intercepts with the axes (by setting $x=0$ and finding $y$, and setting $y=0$ and finding $x$).
    • If a constraint is an equality ($Ax + By = C$), any feasible solution must lie exactly on the segment of this line within the feasible region.
  3. Identify the Feasible Region:
    • For each inequality constraint, the line drawn divides the plane into two half-planes. To determine which half-plane is the feasible region for that specific constraint, pick a test point not on the line (the origin (0,0) is convenient if the line does not pass through it). Substitute the coordinates of the test point into the original inequality.
      • If the test point satisfies the inequality, then the half-plane containing the test point is the feasible region for that constraint.
      • If the test point does not satisfy the inequality, then the other half-plane is the feasible region.
      Indicate the feasible side for each constraint (e.g., by shading or drawing arrows).
    • The non-negativity constraints ($x \ge 0, y \ge 0$) mean that the feasible region must lie within the first quadrant.
    • The Feasible Region for the entire LPP is the area on the graph that satisfies all constraints simultaneously. It is the intersection of all the feasible half-planes and the first quadrant. This region will be a convex polygon (possibly bounded or unbounded). Any point within this region is a feasible solution.
  4. Find the Corner Points (Vertices) of the Feasible Region: The vertices of the feasible region are the points where the boundary lines of the region intersect. Identify the coordinates of all such corner points. These include intersections of constraint lines with the axes, and intersections of pairs of constraint lines with each other, provided these intersection points lie on the boundary of the feasible region. The coordinates of an intersection point are found by solving the system of the two linear equations corresponding to the intersecting lines.
  5. Evaluate the Objective Function at Each Corner Point: Calculate the value of the objective function $Z = c_1x + c_2y$ by substituting the $(x, y)$ coordinates of each corner point into the objective function equation.
  6. Determine the Optimal Solution:
    • For a maximization problem, the corner point that yields the highest value of $Z$ is the optimal solution, and this highest value is the maximum value of $Z$.
    • For a minimization problem, the corner point that yields the lowest value of $Z$ is the optimal solution, and this lowest value is the minimum value of $Z$.
    According to the Fundamental Theorem of Linear Programming (or Corner Point Theorem), if an optimal solution exists for a bounded feasible region, it will occur at one of the corner points. If the feasible region is unbounded, an optimal solution might exist at a corner point, or it might not exist (e.g., for maximization in an unbounded region where Z can increase indefinitely).
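The steps above can be sketched as a small corner-point solver. This is an illustrative sketch (not the textbook's own method, and it assumes NumPy is available): every constraint, including the two axes, is treated as a boundary line; the lines are intersected pairwise; intersections that violate any constraint are discarded; and $Z$ is evaluated at the surviving corner points.

```python
import itertools
import numpy as np

def corner_point_solve(c, A, b, maximize=True):
    """Optimize Z = c @ [x, y] subject to A @ [x, y] <= b and x, y >= 0
    by enumerating the corner points of the feasible region."""
    # Append the non-negativity constraints as -x <= 0 and -y <= 0
    A = np.vstack([np.asarray(A, float), [[-1.0, 0.0], [0.0, -1.0]]])
    b = np.concatenate([np.asarray(b, float), [0.0, 0.0]])

    corners = []
    for i, j in itertools.combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:      # parallel lines: no intersection
            continue
        p = np.linalg.solve(M, b[[i, j]])      # intersection of the two lines
        if np.all(A @ p <= b + 1e-9):          # keep only feasible intersections
            corners.append(p)

    best = max if maximize else min
    return best(corners, key=lambda p: float(np.dot(c, p)))

# The example LPP solved graphically below: maximize Z = 80x + 100y
p = corner_point_solve([80, 100], [[3, 4], [1, 2]], [40, 15])
print(p, 80 * p[0] + 100 * p[1])   # optimal corner and Z value
```

Pairwise intersection works here because every corner of a 2-variable feasible region lies where two boundary lines meet, which is exactly what the Corner Point Theorem exploits.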


Example Solution

Example 1. Solve the following Linear Programming Problem using the graphical method:

Maximize $Z = 80x + 100y$

Subject to the constraints:

$3x + 4y \le 40$

... (1)

$x + 2y \le 15$

... (2)

$x \ge 0, y \ge 0$

... (3)

Answer:

Given:

A Linear Programming Problem with objective function Maximize $Z = 80x + 100y$ and constraints as listed above.

To Find:

The optimal solution (values of $x$ and $y$ that maximize $Z$) using the graphical method.

Solution:

Step 1: Formulation

The problem is already formulated with two decision variables $x$ and $y$, a linear objective function, and linear inequality constraints including non-negativity.

Step 2: Plot the Constraints

Plot the lines corresponding to the equality version of each inequality constraint:

  • Constraint (1): $3x + 4y \le 40$. Plot the line $L_1: 3x + 4y = 40$.
    • If $x = 0$, $4y = 40 \implies y = 10$. Point: (0, 10).
    • If $y = 0$, $3x = 40 \implies x = 40/3 \approx 13.33$. Point: (40/3, 0).
    • Plot the line passing through (0, 10) and (40/3, 0).
  • Constraint (2): $x + 2y \le 15$. Plot the line $L_2: x + 2y = 15$.
    • If $x = 0$, $2y = 15 \implies y = 7.5$. Point: (0, 7.5).
    • If $y = 0$, $x = 15$. Point: (15, 0).
    • Plot the line passing through (0, 7.5) and (15, 0).

Step 3: Identify the Feasible Region

Check the inequality for each constraint using the origin (0,0) as a test point:

  • Constraint (1): $3x + 4y \le 40$. At (0,0): $3(0) + 4(0) \le 40 \implies 0 \le 40$, which is True. The feasible region for $3x + 4y \le 40$ is the half-plane containing the origin (below the line $L_1$).
  • Constraint (2): $x + 2y \le 15$. At (0,0): $0 + 2(0) \le 15 \implies 0 \le 15$, which is True. The feasible region for $x + 2y \le 15$ is the half-plane containing the origin (below the line $L_2$).
  • Non-negativity constraints (3): $x \ge 0, y \ge 0$. This restricts the feasible region to the first quadrant (where $x$ is non-negative and $y$ is non-negative).

The feasible region for the LPP is the area in the first quadrant that is below or on line $L_1$ AND below or on line $L_2$. This region is a convex polygon.

Graph showing the feasible region for the example LPP, bounded by the lines 3x+4y=40, x+2y=15, x=0, and y=0 in the first quadrant.

Step 4: Find the Corner Points (Vertices)

The vertices of the feasible region are the points where the boundary lines intersect. From the graph, the vertices are:

  • O: The origin (0, 0). This is the intersection of $x \ge 0$ and $y \ge 0$.
  • A: The intersection of the line $x = 0$ (y-axis) and the line $L_2: x + 2y = 15$. Substitute $x=0$ into $x + 2y = 15$: $0 + 2y = 15 \implies y = 7.5$. Point A is (0, 7.5).
  • B: The intersection of the lines $L_1: 3x + 4y = 40$ and $L_2: x + 2y = 15$. We solve this system of linear equations:

    $3x + 4y = 40$

    ... (a)

    $x + 2y = 15$

    ... (b)

    From equation (b), we can express $x$ as $x = 15 - 2y$. Substitute this into equation (a):

    $3(15 - 2y) + 4y = 40$

    $45 - 6y + 4y = 40$

    $45 - 2y = 40$

    $-2y = 40 - 45$

    $-2y = -5 \implies y = 2.5$

    Now substitute $y = 2.5$ back into equation (b) to find $x$:

    $x + 2(2.5) = 15$

    $x + 5 = 15 \implies x = 10$

    Point B is (10, 2.5).
  • C: The intersection of the line $y = 0$ (x-axis) and the line $L_1: 3x + 4y = 40$. On the x-axis the binding boundary is $L_1$, not $L_2$, because $L_1$ meets the axis at $x = 40/3 \approx 13.33$, which is less than $x = 15$ where $L_2$ meets it. Substitute $y=0$ into $3x + 4y = 40$: $3x = 40 \implies x = 40/3$. Point C is (40/3, 0).

The corner points of the feasible region are O(0, 0), A(0, 7.5), B(10, 2.5), and C(40/3, 0).
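The algebra for corner point B can be double-checked numerically. A quick sketch using NumPy's linear-system solver (an aside of this note, not part of the textbook solution):

```python
import numpy as np

# Intersection of L1: 3x + 4y = 40 and L2: x + 2y = 15,
# written as the 2x2 system A @ [x, y] = b
A = np.array([[3.0, 4.0],
              [1.0, 2.0]])
b = np.array([40.0, 15.0])

x, y = np.linalg.solve(A, b)
print(x, y)   # coordinates of corner point B
```

This reproduces the substitution result $x = 10$, $y = 2.5$.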

Step 5: Evaluate the Objective Function at Corner Points

Evaluate $Z = 80x + 100y$ at each corner point:

  • At O(0, 0): $Z = 80(0) + 100(0) = 0$
  • At A(0, 7.5): $Z = 80(0) + 100(7.5) = 750$
  • At B(10, 2.5): $Z = 80(10) + 100(2.5) = 800 + 250 = 1050$
  • At C(40/3, 0): $Z = 80(40/3) + 100(0) = 3200/3 \approx 1066.67$

Step 6: Determine the Optimal Solution

We want to maximize $Z$. Comparing the values of $Z$ at the corner points (0, 750, 1050, 1066.67), the maximum value is approximately 1066.67.

The maximum value of $Z$ is $\frac{3200}{3} \approx 1066.67$.

This maximum value occurs at the corner point C(40/3, 0).

The optimal solution is $x = \frac{40}{3}, y = 0$.

Interpreted as a production problem, this means producing $\frac{40}{3} \approx 13.33$ units of the first product and none of the second, yielding a maximum profit of $\textsf{₹}\frac{3200}{3} \approx \textsf{₹}1066.67$. Note that if the problem required integer solutions, Integer Linear Programming methods would be needed; standard LP allows fractional solutions.
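As a cross-check, the same LPP can be handed to SciPy's `linprog` (a sketch under the assumption that SciPy is installed; the coefficients are negated because `linprog` minimizes):

```python
from scipy.optimize import linprog

# Maximize Z = 80x + 100y  ->  minimize -80x - 100y
res = linprog(c=[-80, -100],
              A_ub=[[3, 4],    # 3x + 4y <= 40
                    [1, 2]],   # x  + 2y <= 15
              b_ub=[40, 15],
              bounds=[(0, None), (0, None)])  # x >= 0, y >= 0

print(res.x, -res.fun)   # optimal (x, y) and maximum Z
```

The solver lands on the same vertex found graphically, $x = 40/3$, $y = 0$, with $Z = 3200/3$.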



Feasible and Infeasible Regions


When solving a Linear Programming Problem (LPP), especially using the graphical method, understanding the concepts of the feasible region and the infeasible region is fundamental. These concepts provide the geometric basis for identifying the potential solutions to the problem.


Feasible Region (Solution Space)

The Feasible Region (also known as the feasible set, solution space, or constraint region) is the set of all points or combinations of decision variables that satisfy all the constraints of the Linear Programming Problem simultaneously. This includes all the linear inequality/equality constraints and the non-negativity constraints.

Graphically, for a 2-variable LPP (with variables $x$ and $y$), each linear inequality constraint ($Ax + By \le C$ or $Ax + By \ge C$) defines a half-plane on the $xy$-plane. The non-negativity constraints ($x \ge 0, y \ge 0$) restrict the area to the first quadrant. The feasible region is the area where all these half-planes (and any lines for equality constraints) overlap. It is the intersection of the regions defined by each individual constraint.

Properties of the Feasible Region in an LPP:

  1. Convexity: The feasible region of an LPP is always a convex set. A set is convex if for any two points within the set, the entire straight line segment connecting these two points also lies entirely within the set. This property is crucial because it guarantees that any local optimum is also a global optimum.
  2. Polyhedral Shape: For LPPs, the feasible region is a polytope (a polygon in 2 dimensions, a polyhedron in 3 dimensions, or a generalized shape in higher dimensions). It is formed by the intersection of half-spaces defined by the linear constraints.
  3. Bounded or Unbounded: The feasible region can be:
    • Bounded: The region is enclosed within a finite area. It does not extend infinitely in any direction. Graphically, it looks like a closed polygon.
    • Unbounded: The region extends infinitely in one or more directions. Graphically, it is not completely enclosed; it might have edges that go off to infinity.
  4. Empty: In some cases, the set of constraints might be contradictory, meaning there is no point that satisfies all constraints simultaneously. In this case, the feasible region is an empty set.

Visual examples in 2D:

Diagram showing a polygon as a bounded feasible region and an open area as an unbounded feasible region.

Importance of the Feasible Region

The feasible region is the set of all possible solutions that respect the problem's limitations. The optimal solution to the LPP (the point that maximizes or minimizes the objective function) must necessarily lie within the feasible region or on its boundary. Points outside the feasible region are irrelevant because they represent combinations of decision variables that are not permitted by the problem's constraints.


Infeasible Region

The Infeasible Region is the set of all points that do not belong to the feasible region. These are the points that violate at least one of the constraints of the LPP. Graphically, it is the area outside the boundaries of the feasible region.

Infeasible Problem (Empty Feasible Region)

An LPP is said to be infeasible if its feasible region is empty. This occurs when the constraints are inconsistent or contradictory, such that there is no single point that satisfies all of them simultaneously. An infeasible LPP has no solution because there are no points that meet all the problem's requirements.

Example of constraints that would lead to an infeasible region in 2D:

$x + y \le 5$

$x + y \ge 10$

$x \ge 0, y \ge 0$

It is impossible for the sum of two non-negative numbers ($x+y$) to be simultaneously less than or equal to 5 and greater than or equal to 10. The half-planes for $x+y \le 5$ and $x+y \ge 10$ do not overlap, resulting in an empty intersection and thus an empty feasible region.
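The contradiction can also be made visible in code. A finite scan can never prove infeasibility on its own (the algebraic argument above does that), but it illustrates the empty overlap. A small Python sketch using the constraints above:

```python
def satisfies_all(x, y):
    """All constraints of the example: x + y <= 5, x + y >= 10, x >= 0, y >= 0."""
    return x + y <= 5 and x + y >= 10 and x >= 0 and y >= 0

# Scan a grid of candidate points; none can satisfy both sum constraints.
feasible_points = [(x, y) for x in range(21) for y in range(21)
                   if satisfies_all(x, y)]
print(feasible_points)  # -> [] : no point satisfies every constraint
```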

Diagram showing two non-overlapping feasible regions for individual constraints, resulting in no common feasible region for the LPP.

Recognizing an infeasible problem is important. It indicates that the given set of conditions and objectives cannot be met simultaneously, suggesting that the problem formulation or the real-world scenario it represents needs to be re-evaluated.



Feasible and Infeasible Solutions


Following the definitions of feasible and infeasible regions, we can define what constitutes a feasible or infeasible solution in a Linear Programming Problem (LPP). While the region refers to the entire area or set defined by the constraints, a solution refers to a specific point or a particular set of values for the decision variables.


Feasible Solution

A specific set of values for the decision variables $(x_1, x_2, \dots, x_n)$ is called a feasible solution if it satisfies all the constraints of the LPP, including the non-negativity constraints.

In simpler terms, a feasible solution is any combination of the quantities represented by the decision variables that is possible given the limitations and requirements of the problem.

Geometrically, in a 2-variable problem with decision variables $x$ and $y$, a point $(x, y)$ is a feasible solution if it lies within or exactly on the boundary of the feasible region (the polygon or unbounded area determined by the constraints).

Example: Consider the constraints: $2x + y \le 10$, $x + y \le 7$, $x \ge 0, y \ge 0$. The point $(2, 3)$ is a feasible solution because:

  • $2(2) + 3 = 7 \le 10$, so the first constraint is satisfied.
  • $2 + 3 = 5 \le 7$, so the second constraint is satisfied.
  • $2 \ge 0$ and $3 \ge 0$, so the non-negativity constraints are satisfied.

Since $(2, 3)$ satisfies all constraints, it is a feasible solution. It would be a point inside or on the boundary of the feasible region on a graph.
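Checking a candidate point constraint by constraint is easy to mechanize. A small Python sketch (the constraint labels are invented for readability) that reports which constraints a point violates:

```python
# Constraints of the example: 2x + y <= 10, x + y <= 7, x >= 0, y >= 0.
constraints = {
    "2x + y <= 10": lambda x, y: 2 * x + y <= 10,
    "x + y <= 7":   lambda x, y: x + y <= 7,
    "x >= 0":       lambda x, y: x >= 0,
    "y >= 0":       lambda x, y: y >= 0,
}

def violated(x, y):
    """Return the list of constraints that the point (x, y) fails."""
    return [name for name, test in constraints.items() if not test(x, y)]

print(violated(2, 3))  # -> [] : (2, 3) violates nothing, so it is feasible
print(violated(5, 5))  # -> both sum constraints fail, so (5, 5) is infeasible
```

An empty list means the point is a feasible solution; a non-empty list names exactly the constraints that make it infeasible.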

Basic Feasible Solution (BFS)

While any point in the feasible region is a feasible solution, the corner points (vertices) of the feasible region are particularly important. In the context of algebraic methods like the Simplex method, these corner points correspond to Basic Feasible Solutions (BFS). A basic feasible solution is a feasible solution that is also a basic solution (a solution obtained by setting some variables to zero and solving the constraints as equations). The significance of BFS lies in the fact that, according to the Fundamental Theorem of Linear Programming, if an optimal solution exists, at least one optimal solution will be a basic feasible solution (i.e., it will occur at a corner point of the feasible region).
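The correspondence between corner points and basic feasible solutions can be illustrated by brute force: treat every pair of constraint boundaries as equations, solve each resulting 2×2 system, and keep only the intersection points that satisfy all constraints. A Python sketch using the sample constraints $2x + y \le 10$, $x + y \le 7$, $x \ge 0$, $y \ge 0$:

```python
from itertools import combinations

# Each constraint as (a, b, c) meaning a*x + b*y <= c; its boundary is a*x + b*y = c.
constraints = [(2, 1, 10),   # 2x + y <= 10
               (1, 1, 7),    # x + y <= 7
               (-1, 0, 0),   # -x <= 0, i.e. x >= 0
               (0, -1, 0)]   # -y <= 0, i.e. y >= 0

def intersect(c1, c2):
    """Solve the 2x2 system given by two boundary lines; None if parallel."""
    (a1, b1, k1), (a2, b2, k2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((k1 * b2 - k2 * b1) / det, (a1 * k2 - a2 * k1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

corners = set()
for c1, c2 in combinations(constraints, 2):
    p = intersect(c1, c2)
    if p is not None and feasible(p):
        corners.add(p)
print(sorted(corners))  # the corner points (basic feasible solutions)
```

For this region the surviving intersections are $(0,0)$, $(0,7)$, $(3,4)$ and $(5,0)$; points such as $(0,10)$ and $(7,0)$ are boundary intersections but violate another constraint, so they are basic solutions that are not feasible.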


Infeasible Solution

A specific set of values for the decision variables $(x_1, x_2, \dots, x_n)$ is called an infeasible solution if it violates at least one of the constraints of the LPP.

Infeasible solutions are combinations of decision variables that are not possible or permitted according to the problem's requirements.

Geometrically, in a 2-variable problem, an infeasible solution is any point $(x, y)$ that lies outside the feasible region. Such a point violates one or more of the boundaries defined by the constraints.

Example: Consider the same constraints: $2x + y \le 10$, $x + y \le 7$, $x \ge 0, y \ge 0$. The point $(5, 5)$ is an infeasible solution because:

  • $2(5) + 5 = 15 > 10$, violating the first constraint.
  • $5 + 5 = 10 > 7$, violating the second constraint.

Even though $(5,5)$ satisfies non-negativity, it fails to satisfy the first two constraints, so it is an infeasible solution. It would lie outside the feasible region on a graph.

The ultimate goal in solving an LPP is not just to find any feasible solution, but to find the Optimal Feasible Solution – the feasible solution that gives the best possible value (maximum or minimum) for the objective function. This concept will be discussed in the next section.



Optimal Feasible Solution


The process of formulating a Linear Programming Problem (LPP) involves defining decision variables, an objective function, and constraints. The feasible region represents all possible combinations of decision variables that satisfy these constraints. The objective of solving the LPP is not just to find any feasible solution, but to find the "best" one. This "best" solution is known as the Optimal Feasible Solution.


Definition of Optimal Feasible Solution

An Optimal Feasible Solution is a feasible solution (a set of values for the decision variables that satisfies all constraints) that yields the optimal value for the objective function.

In other words, an optimal feasible solution is a point $(x_1, x_2, \dots, x_n)$ that lies within or on the boundary of the feasible region and, when substituted into the objective function $Z$, produces the maximum (or minimum) value among all points in the feasible region.


Fundamental Theorem of Linear Programming (Corner Point Theorem)

A critical theorem in Linear Programming, which provides the basis for the graphical and simplex methods, is the Fundamental Theorem of Linear Programming (or the Corner Point Theorem). It states:

If a Linear Programming Problem has a feasible region which is a bounded convex polygon (or polyhedron in higher dimensions), then the objective function has both a maximum and a minimum value on this region, and these optimal values occur at one or more of the corner points (vertices) of the feasible region.

Implications of the theorem:

If the feasible region is unbounded, an optimal solution may or may not exist. If it exists, it will still occur at a corner point. However, if the objective function can be improved indefinitely in the direction of the unbounded region, then no finite optimal solution exists.
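For a bounded region, the theorem reduces optimization to a finite check: evaluate the objective only at the vertices. A small Python sketch, assuming the sample region with corners $(0,0)$, $(5,0)$, $(3,4)$, $(0,7)$ and a hypothetical objective $Z = 3x + 2y$:

```python
# Corner points of a sample bounded feasible region (from the earlier example).
corners = [(0, 0), (5, 0), (3, 4), (0, 7)]

def Z(point):
    """Hypothetical objective function Z = 3x + 2y."""
    x, y = point
    return 3 * x + 2 * y

# Per the corner point theorem, only the vertices need to be evaluated.
best = max(corners, key=Z)
print(best, Z(best))    # (3, 4) gives the maximum, Z = 17
worst = min(corners, key=Z)
print(worst, Z(worst))  # (0, 0) gives the minimum, Z = 0
```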


Possible Outcomes of an LPP

When attempting to solve a Linear Programming Problem, one of several possible outcomes can occur:

  1. Unique Optimal Solution: The LPP has exactly one feasible solution that provides the single best value for the objective function. Graphically, this occurs when the objective function line (or plane) intersects the feasible region at a single corner point that yields the maximum/minimum value.
  2. Multiple Optimal Solutions (Alternate Optimal Solutions): The LPP has more than one feasible solution that yields the same best value for the objective function. This happens when the objective function line is parallel to one of the boundary edges of the feasible region, and that edge corresponds to the optimal value. In this case, the two corner points forming that edge, and every point on the line segment connecting them, are all optimal solutions.
  3. Unbounded Solution: This occurs in some LPPs with an unbounded feasible region.
    • For a maximization problem, if the feasible region is unbounded in a direction where the objective function value increases, then $Z$ can be made arbitrarily large. There is no finite maximum value.
    • For a minimization problem, if the feasible region is unbounded in a direction where the objective function value decreases, then $Z$ can be made arbitrarily small. There is no finite minimum value.
    Graphically, the feasible region extends infinitely, and moving along a path within this region increases (for maximization) or decreases (for minimization) the objective function value without limit.
  4. No Feasible Solution (Infeasible Problem): As discussed in the section on feasible and infeasible regions, if the constraints are contradictory, the feasible region is empty. Since there are no points that satisfy all constraints, there are no feasible solutions, and therefore, no optimal feasible solution exists.
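The unbounded outcome can also be seen numerically: moving along a suitable ray inside an unbounded feasible region increases the objective without limit. A sketch for the hypothetical problem "maximize $Z = x + y$ subject to $x - y \le 1$, $x \ge 0$, $y \ge 0$":

```python
def feasible(x, y):
    """Constraints of a hypothetical unbounded LPP: x - y <= 1, x >= 0, y >= 0."""
    return x - y <= 1 and x >= 0 and y >= 0

def Z(x, y):
    return x + y

# The ray (t, t) for t >= 0 stays feasible while Z = 2t grows without bound.
values = []
for t in [1, 10, 100, 1000]:
    assert feasible(t, t)
    values.append(Z(t, t))
print(values)  # Z keeps increasing along the ray: no finite maximum exists
```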

Identifying which of these outcomes applies to a given LPP is a crucial part of the solution process. For problems with two variables, the graphical method allows visual identification of the feasible region and its type (bounded/unbounded/empty), and subsequent evaluation at corner points to determine the unique, multiple, or unbounded optimal solution (if one exists).